Towards Understanding the Smoothed Approximation Ratio of the 2-Opt Heuristic
Abstract
The 2-Opt heuristic is a very simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor. In an attempt to explain the approximation performance of 2-Opt, we analyze the smoothed approximation ratio of 2-Opt. We obtain a bound of O(log(1/σ)) for the smoothed approximation ratio of 2-Opt. As a lower bound, we prove that the worst-case lower bound of Ω(log n / log log n) for the approximation ratio holds for σ = O(1/√n). Our main technical novelty is that, different from existing smoothed analyses, we do not separately analyze objective values of the global and the local optimum on all inputs, but simultaneously bound them on the same input.

1 2-Opt and Smoothed Analysis

The traveling salesman problem (TSP) is one of the best-studied combinatorial optimization problems. Euclidean TSP is the following variant: given points X ⊆ [0, 1]^d, find the shortest Hamiltonian cycle that visits all points in X (also called a tour). Even this restricted variant is NP-hard for d ≥ 2 [17]. While Euclidean TSP admits a polynomial-time approximation scheme [1, 16], heuristics that are simpler and easier to implement are often used in practice. A very simple and popular heuristic for finding near-optimal tours quickly is the 2-Opt heuristic: starting from an initial tour, we iteratively replace two edges by two other edges to obtain a shorter tour until we have found a local optimum. Experiments indicate that 2-Opt converges to near-optimal solutions quickly and produces solutions that are within a few percent of the optimal solution [10,11]. In contrast to its success on practical instances, 2-Opt performs poorly in the worst case: its worst-case running time is exponential even for d = 2 [8], and its worst-case approximation ratio of O(log n) has an almost matching lower bound of Ω(log n / log log n) for Euclidean instances [6].

In order to explain the performance of algorithms whose worst-case guarantees do not reflect the observed performance, smoothed analysis has been introduced [19]. It is a hybrid of worst-case analysis (which is often too pessimistic) and average-case analysis (which is often dominated by completely random instances that have special properties not shared by typical instances). In smoothed analysis, an adversary specifies an instance, and then this instance is slightly randomly perturbed; the smoothed performance is the expected performance, where the expectation is taken over the random perturbation. The motivating assumption of smoothed analysis is that practical instances are often subject to a small amount of random noise that can, e.g., come from measurement errors or numerical imprecision. Smoothed analysis often allows more realistic conclusions about the performance of an algorithm than mere worst-case or average-case analysis. It has been applied successfully to explain the running time of the 2-Opt heuristic [8,15] as well as of other local search algorithms [2,3,14]. We refer to two surveys for an overview of smoothed analysis [13,20].

Much less is known about the smoothed approximation performance of algorithms. Karger and Onak have shown that multi-dimensional bin packing can be approximated arbitrarily well for smoothed instances [12], and there are frameworks to approximate Euclidean optimization problems such as TSP for smoothed instances [4,7].
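To make the local search step concrete, the following minimal Python sketch implements the 2-Opt heuristic as described above (our own illustration, not code from the paper; names such as two_opt and tour_length are ours). It repeatedly replaces two tour edges by two other edges whenever this shortens the tour and stops at a local optimum.

import math
import random

def tour_length(points, tour):
    # Total Euclidean length of the closed tour, given as a list of point indices.
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour=None):
    # 2-Opt local search: replace the tour edges (a,b) and (c,d) by (a,c) and
    # (b,d) whenever this shortens the tour; stop at a local optimum.
    n = len(points)
    tour = list(range(n)) if tour is None else list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # the two chosen edges would share an endpoint
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                old = math.dist(points[a], points[b]) + math.dist(points[c], points[d])
                new = math.dist(points[a], points[c]) + math.dist(points[b], points[d])
                if new < old - 1e-12:
                    # Performing the exchange amounts to reversing the tour
                    # segment between the two removed edges.
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(50)]
    print(round(tour_length(pts, two_opt(pts)), 3))

The sketch uses a first-improvement strategy; it terminates because every accepted exchange strictly decreases the tour length.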
However, these approaches [4,7,12] mostly consider algorithms tailored to solving smoothed instances. With respect to concrete algorithms, we are only aware of analyses of the jump and lex-jump heuristics for scheduling [5,9] and an upper bound of O(φ^(1/d)) for the smoothed approximation ratio of 2-Opt in the so-called one-step model [8]. Here, φ is an upper bound on the density functions according to which the points are drawn. Translated to Gaussian perturbations, we would obtain an upper bound of O(1/σ) if we truncate the Gaussian distribution such that all points lie in a hypercube of constant side length.

In order to explain the practical approximation performance of 2-Opt, we provide an improved smoothed analysis of its approximation ratio. More precisely, we provide bounds on the quality of the worst local optimum when the n data points from [0, 1]^d are perturbed by Gaussian distributions of standard deviation σ. Our bound of O(log(1/σ)) improves significantly upon the direct translation of the bound of Englert et al. [8] to Gaussian perturbations (see Section 3 for how to translate the bound to Gaussian perturbations). It smoothly interpolates between the average-case constant approximation ratio and the worst-case bound of O(log n).

In order to obtain our improved bound for the smoothed approximation ratio, we take into account the origins of the points, i.e., their unperturbed positions. Although this information is not available to the algorithm, it can be exploited in the analysis. The smoothed analyses of approximation ratios so far [4,5,7–9,12] have essentially ignored this information. While this simplifies the analysis, being oblivious to the unperturbed positions seems to be too pessimistic. In fact, we see that the bound of Englert et al. [8] cannot be improved beyond O(1/σ) by ignoring the positions of the points (Section 3). The reason for this limitation is that the lower bound for the global optimum is obtained if all points have the same origin, which corresponds to an average-case rather than a smoothed analysis. On the other hand, the upper bound for the local optimum has to ...
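To see where the O(1/σ) translation mentioned above comes from, the following back-of-the-envelope calculation may help (our own sketch, not taken from the paper): for σ ≤ 1, a d-dimensional Gaussian with standard deviation σ that is truncated to a hypercube of constant side length has maximum density

\[
  \phi \;=\; O\!\left( \max_{x \in \mathbb{R}^d} \frac{1}{(2\pi\sigma^2)^{d/2}} \, e^{-\|x-\mu\|^2/(2\sigma^2)} \right) \;=\; O\!\left( \sigma^{-d} \right),
\]

since the truncation changes the normalizing constant only by a constant factor in this regime. Plugging φ = O(σ^{-d}) into the one-step bound of O(φ^(1/d)) yields O(1/σ), the translated bound discussed above.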
Similar Resources
On the Smoothed Approximation Ratio of the 2-Opt Heuristic for the TSP
The 2-Opt heuristic is a simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor. In an attempt to explain the approximation performance of 2-Opt, we prove an upper bound of exp(O(√(log(1/σ)))) for the smoothed approximation ratio of 2-Opt. As a lower...
Smoothed Analysis of the 2-Opt Heuristic for the TSP: Polynomial Bounds for Gaussian Noise
The 2-opt heuristic is a very simple local search heuristic for the traveling salesman problem. While it usually converges quickly in practice, its running time can be exponential in the worst case. In order to explain the performance of 2-opt, Englert, Röglin, and Vöcking (Algorithmica, to appear) provided a smoothed analysis in the so-called one-step model on d-dimensional Euclidean instances...
Worst Case and Probabilistic Analysis of the 2-Opt
2-Opt is probably the most basic and widely used local search heuristic for the TSP. This heuristic achieves amazingly good results on “real world” Euclidean instances both with respect to running time and approximation ratio. There are numerous experimental studies on the performance of 2-Opt. However, the theoretical knowledge about this heuristic is still very limited. Not even its worst cas...
A Novel Feature-Based Approach to Characterize Algorithm Performance for the Traveling Salesman Problem
Meta-heuristics are frequently used to tackle NP-hard combinatorial optimization problems. With this paper we contribute to the understanding of the success of 2-opt based local search algorithms for solving the traveling salesman problem (TSP). Although 2-opt is widely used in practice, it is hard to understand its success from a theoretical perspective. We take a statistical approach and exam...
Effective heuristics and meta-heuristics for the quadratic assignment problem with tuned parameters and analytical comparisons
The quadratic assignment problem (QAP) is a well-known problem in facility location and layout. It belongs to the class of NP-complete problems. Many heuristic and meta-heuristic methods for the QAP have been presented in the literature. In this paper, we applied 2-opt, greedy 2-opt, 3-opt, greedy 3-opt, and VNZ as heuristic methods and tabu search (TS), simulated annealing, and pa...
Publication date: 2015